16 research outputs found

    Scalable exploration of 3D massive models

    Get PDF
    Programa Oficial de Doutoramento en Tecnoloxías da Información e as Comunicacións. 5032V01
    [Abstract] This thesis introduces scalable techniques that advance the state of the art in massive model creation and exploration. Concerning model creation, we present methods for improving reality-based scene acquisition and processing, introducing an efficient implementation of scalable out-of-core point clouds and a data-fusion approach for creating detailed colored models from cluttered scene acquisitions. The core of this thesis concerns enabling technology for the exploration of general large datasets. Two novel solutions are introduced. The first is an adaptive out-of-core technique that exploits the GPU rasterization pipeline and hardware occlusion queries to create coherent batches of work for localized shader-based ray-tracing kernels, opening the door to out-of-core ray tracing with shadowing and global illumination. The second is an aggressive compression method that exploits the geometric redundancy commonly found in large 3D models to compress data so that it fits, in a fully renderable format, in GPU memory. The method targets voxelized representations of 3D scenes, which are widely used to accelerate visibility queries on the GPU. Compression is achieved by merging subtrees that are identical up to a similarity transform and by exploiting the skewed distribution of references to shared nodes to store child pointers using a variable bit-rate encoding. The capability and performance of all methods are evaluated on several very massive real-world scenes from different domains and sectors, including cultural heritage, engineering, and gaming.
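    The voxel compression described in this abstract hinges on detecting subtrees that are structurally identical and storing each of them only once. The sketch below illustrates just that core idea on a sparse voxel octree, in plain Python; the similarity transforms, the skewed-reference pointer layout and the variable bit-rate encoding of the actual method are omitted, and all names are illustrative.

    def build_octree(voxels, depth):
        """Nested-tuple sparse octree over (x, y, z) voxels in a 2^depth cube."""
        if not voxels:
            return None
        if depth == 0:
            return True  # occupied leaf voxel
        half = 1 << (depth - 1)
        children = []
        for octant in range(8):
            ox, oy, oz = octant & 1, (octant >> 1) & 1, (octant >> 2) & 1
            sub = {(x - ox * half, y - oy * half, z - oz * half)
                   for (x, y, z) in voxels
                   if x // half == ox and y // half == oy and z // half == oz}
            children.append(build_octree(sub, depth - 1))
        return tuple(children)

    def to_dag(node, unique=None):
        """Collapse identical subtrees so each distinct subtree is stored exactly once."""
        if unique is None:
            unique = {}
        if node is None or node is True:
            return node
        canon = tuple(to_dag(child, unique) for child in node)
        return unique.setdefault(canon, canon)

    def count_nodes(node, seen=None):
        """Count distinct inner node objects actually stored (leaves excluded)."""
        if seen is None:
            seen = set()
        if node is None or node is True or id(node) in seen:
            return 0
        seen.add(id(node))
        return 1 + sum(count_nodes(child, seen) for child in node)

    # A scene with lots of repeated structure compresses well: a flat 64x64 slab.
    voxels = {(x, y, 0) for x in range(64) for y in range(64)}
    tree = build_octree(voxels, depth=6)
    dag = to_dag(tree)
    print(count_nodes(tree), "octree nodes ->", count_nodes(dag), "DAG nodes")

    Shared subtrees become shared objects in the deduplicated structure, so the inner-node count drops from over a thousand to one node per level in this artificial example.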

    Crack Detection in Single- and Multi-Light Images of Painted Surfaces using Convolutional Neural Networks

    Get PDF
    Cracks represent an imminent danger for painted surfaces and need to be detected before they degenerate into more severe aging effects, such as color loss. Automatic detection of cracks in images of painted surfaces would therefore be extremely useful for art conservators; however, classical image processing solutions are not effective at detecting them and distinguishing them from other lines or surface features. A possible way to improve the quality of crack detection is to exploit Multi-Light Image Collections (MLIC), which are often acquired in the Cultural Heritage domain thanks to the diffusion of the Reflectance Transformation Imaging (RTI) technique, allowing a low-cost and rich digitization of artworks' surfaces. In this paper, we propose a pipeline for the detection of cracks on egg-tempera paintings from multi-light image acquisitions, which can also be used on single images. The method is based on single- or multi-light edge detection and on a custom Convolutional Neural Network, trained on RTI data, that classifies image patches around edge points as crack or non-crack. The pipeline classifies regions with cracks with good accuracy when applied to MLIC, and still gives reasonable results on single images. The analysis of the performance for different lighting directions also reveals optimal lighting directions.
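    The pipeline above is built around classifying small image patches centered on edge pixels. A rough sketch of that stage follows, using OpenCV's Canny detector as a stand-in for the paper's single/multi-light edge detection and an abstract classify_patches callable in place of the custom CNN; the patch size and thresholds are illustrative assumptions.

    import numpy as np
    import cv2

    PATCH = 32  # assumed patch size around each edge pixel (not from the paper)

    def edge_patches(gray):
        """Yield (y, x, patch) for every edge pixel far enough from the image border."""
        edges = cv2.Canny(gray, 50, 150)
        h = PATCH // 2
        for y, x in np.argwhere(edges > 0):
            if h <= y < gray.shape[0] - h and h <= x < gray.shape[1] - h:
                yield y, x, gray[y - h:y + h, x - h:x + h]

    def crack_mask(gray, classify_patches):
        """Binary mask of edge pixels whose surrounding patch is classified as crack."""
        mask = np.zeros(gray.shape, dtype=np.uint8)
        coords, patches = [], []
        for y, x, p in edge_patches(gray):
            coords.append((y, x))
            patches.append(p.astype(np.float32) / 255.0)
        if patches:
            # classify_patches: (N, PATCH, PATCH) float array -> (N,) crack probabilities,
            # e.g. the predict function of a trained binary CNN.
            probs = classify_patches(np.stack(patches))
            for (y, x), prob in zip(coords, probs):
                if prob > 0.5:
                    mask[y, x] = 255
        return mask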

    Aplicación para la inspección espacial, volumétrica y seccional interactiva de la Catedral de Santiago de Compostela

    Full text link
    [EN] This paper describes the design, production and implementation of an application for the formal analysis of the Cathedral of Santiago de Compostela. The geometric complexity of the building model at the level of detail required, derived mainly from the profusion of stylistic elements present, which constitutes one of its signs of identity, led us to use the progressive refinement radiosity method to generate a model that could be handled in real time with the visual quality of global illumination, implemented in an application that allows the user to interactively inspect and cross-section the model through multi-touch interaction.
    Barneche Naya, V.; Hernández Ibáñez, L.A.; Jaspe Villanueva, A.; Fariña Fernández, G. (2012). Aplicación para la inspección espacial, volumétrica y seccional interactiva de la Catedral de Santiago de Compostela. Virtual Archaeology Review, 3(6), 78-82. https://doi.org/10.4995/var.2012.4448
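    The real-time model described above bakes global illumination with progressive refinement radiosity. A minimal sketch of that iteration, assuming a precomputed form-factor matrix and ignoring visibility, adaptive subdivision and color channels, which the actual method would need:

    import numpy as np

    def progressive_radiosity(emission, reflectance, form_factors, areas, iters=200):
        """emission, reflectance, areas: (N,) patch arrays; form_factors[i, j] is the
        fraction of energy leaving patch i that arrives at patch j."""
        radiosity = emission.copy()
        unshot = emission.copy()
        for _ in range(iters):
            i = int(np.argmax(unshot * areas))        # shooter: patch with most unshot energy
            shoot, unshot[i] = unshot[i], 0.0
            # Energy gathered by each patch j (reciprocity: F[j, i] = F[i, j] * A[i] / A[j]).
            received = reflectance * shoot * form_factors[i] * areas[i] / areas
            radiosity += received
            unshot += received
        return radiosity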

    Artworks in the spotlight: characterization with a multispectral LED dome

    Get PDF
    We describe the design and realization of a novel multispectral light dome system and the associated software control and calibration tools used to process the acquired data, in a specialized pipeline geared towards the analysis of shape and appearance properties of cultural heritage items. The current prototype dome, built using easily available electronic and lighting components, can illuminate a target of size 20cm x 20cm from 52 directions uniformly distributed over a hemisphere. From each illumination direction, 3 LED lights cover the visible range of the electromagnetic spectrum, as well as long-wave ultraviolet and near infrared. A dedicated control system implemented on Arduino boards connected to a controlling PC fully manages all lighting and a camera to support automated acquisition. The controlling software also allows real-time adjustment of the LED settings and provides a live view of the scene to be captured. We approach per-pixel light calibration by placing dedicated targets in the focal plane: four black reflective spheres for back-tracing the positions of the LED lamps and a planar full-frame white paper to correct for the non-uniformity of radiance. Once light calibration is in place, the multispectral acquisition of an artwork can be completed in a matter of minutes, resulting in a spot-wise appearance profile that stores, at pixel level, the per-frequency intensity value together with the light direction vector. By performing calibrated acquisition of multispectral Reflectance Transformation Imaging (RTI), our analysis system makes it possible to recover surface normals, to characterize the matte and specular behavior of materials, and to explore different surface layers thanks to UV-VIS-IR LED light separation. To demonstrate the system's features, we present the outcomes of the on-site capture of metallic artworks at the National Archaeological Museum of Cagliari, Sardinia.
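    One of the analyses enabled by the calibrated per-pixel light directions is normal recovery. A compact sketch under a Lambertian assumption is given below; the Lambertian model and all names are only an illustration, not the system's exact appearance model.

    import numpy as np

    def normals_from_rti(intensities, light_dirs):
        """intensities: (L, H, W) stack, one image per light; light_dirs: (L, 3) unit vectors.
        Returns (H, W, 3) unit normals and an (H, W) albedo map."""
        L, H, W = intensities.shape
        obs = intensities.reshape(L, -1).astype(np.float64)       # (L, H*W) observations
        # Lambertian model: obs = light_dirs @ g, with g = albedo * normal at each pixel.
        g, *_ = np.linalg.lstsq(light_dirs, obs, rcond=None)      # (3, H*W) least-squares solution
        g = g.T.reshape(H, W, 3)
        albedo = np.linalg.norm(g, axis=-1)
        normals = g / np.maximum(albedo[..., None], 1e-8)
        return normals, albedo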

    Web-based Multi-layered Exploration of Annotated Image-based Shape and Material Models

    No full text
    We introduce a novel versatile approach for letting users explore detailed image-based shape and material models integrated with structured, spatially-associated descriptive information. We represent the objects of interest as a series of registered layers of image-based shape and material information. These layers are represented at multiple scales and can come out of a variety of pipelines, including both RTI representations and spatially-varying normal and BRDF fields, possibly resulting from the fusion of multi-spectral data. An overlay image pyramid associates visual annotations with the various scales. The overlay pyramid of each layer can be easily authored at data-preparation time using widely available image editing tools. At run time, an annotated multi-layered dataset is made available to clients by a standard web server. Users can explore these datasets on a variety of devices, from mobile phones to large-scale displays in museum installations, using JavaScript/WebGL2 clients capable of performing layer selection, interactive relighting and enhanced visualization, annotation display, and focus-and-context multiple-layer exploration using a lens metaphor. The capabilities of our approach are demonstrated on a variety of cultural heritage use cases involving different kinds of annotated surface and material models.
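    As an illustration of the per-pixel relighting such a client performs, the sketch below evaluates the classic 6-coefficient Polynomial Texture Map basis for a given light direction; the actual layers may use other RTI bases or SVBRDF data, so this is only one representative example, written in Python rather than the WebGL2 shaders the clients use.

    import numpy as np

    def relight_ptm(coeffs, lx, ly):
        """coeffs: (H, W, 6) per-pixel PTM coefficients; (lx, ly): light direction
        projected onto the image plane. Returns the relit (H, W) luminance image."""
        basis = np.array([lx * lx, ly * ly, lx * ly, lx, ly, 1.0])
        return np.tensordot(coeffs, basis, axes=([2], [0]))

    # e.g. frame = relight_ptm(layer_coeffs, 0.3, -0.2) for a raking light from the right.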

    Robust reconstruction of interior building structures with multiple rooms under clutter and occlusions

    Get PDF
    We present a robust approach for reconstructing the architectural structure of complex indoor environments given a set of cluttered input scans. Our method first uses an efficient occlusion-aware process to extract planar patches as candidate walls, separating them from clutter and coping with missing data. Using a diffusion process to further increase its robustness, our algorithm is able to reconstruct a clean architectural model from the candidate walls. To our knowledge, this is the first indoor reconstruction method which goes beyond a binary classification and automatically recognizes different rooms as separate components. We demonstrate the validity of our approach by testing it on both synthetic models and real-world 3D scans of indoor environments.
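    The candidate-wall stage amounts to finding large planar patches in the scans. A bare-bones sketch of that generic step using RANSAC plane fitting is given below; the paper's occlusion-aware extraction and clutter handling are considerably more involved, and all thresholds are illustrative assumptions.

    import numpy as np

    def ransac_plane(points, iters=500, threshold=0.02, seed=None):
        """Fit one plane (n, d) with n.p + d = 0 to an (N, 3) array; return plane and inlier mask."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        best_plane = None
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue                      # degenerate sample, skip
            n /= norm
            d = -np.dot(n, p0)
            inliers = np.abs(points @ n + d) < threshold
            if inliers.sum() > best_inliers.sum():
                best_inliers, best_plane = inliers, (n, d)
        return best_plane, best_inliers

    def candidate_walls(points, max_planes=20, min_support=5000):
        """Greedily peel off large planes; keep the near-vertical ones as wall candidates."""
        walls, remaining = [], points
        for _ in range(max_planes):
            plane, inliers = ransac_plane(remaining)
            if plane is None or inliers.sum() < min_support:
                break
            n, _ = plane
            if abs(n[2]) < 0.2:               # nearly vertical normal => wall candidate
                walls.append(plane)
            remaining = remaining[~inliers]
        return walls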

    Automatic room detection and reconstruction in cluttered indoor environments with complex room layouts

    Get PDF
    We present a robust approach for reconstructing the main architectural structure of complex indoor environments given a set of cluttered 3D input range scans. Our method uses an efficient occlusion-aware process to extract planar patches as candidate walls, separating them from clutter and coping with missing data, and automatically extracts the individual rooms that compose the environment by applying a diffusion process on the space partitioning induced by the candidate walls. This diffusion process, which has a natural interpretation in terms of heat propagation, makes our method robust to artifacts and other imperfections that occur in typical scanned data of interiors. For each room, our algorithm reconstructs an accurate polyhedral model by applying methods from robust statistics. We demonstrate the validity of our approach by evaluating it on both synthetic models and real-world 3D scans of indoor environments.
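    The room-separation step can be pictured as heat propagating through the cells of the space partition, with candidate walls acting as poor conductors. The sketch below illustrates that intuition on an abstract cell-adjacency graph; the graph construction, seed selection and weights are assumptions for illustration, not the paper's exact formulation.

    import numpy as np

    def diffuse_rooms(adjacency, seeds, steps=200, alpha=0.2):
        """adjacency: (C, C) symmetric conductance matrix between cells (near zero across
        candidate walls); seeds: one seed cell index per presumed room.
        Returns a (C,) array assigning each cell to the seed it receives the most heat from."""
        C = adjacency.shape[0]
        heat = np.zeros((len(seeds), C))
        for k, s in enumerate(seeds):
            heat[k, s] = 1.0
        # Row-normalized diffusion operator (a lazy random walk over the cell graph).
        degrees = np.maximum(adjacency.sum(axis=1, keepdims=True), 1e-12)
        walk = adjacency / degrees
        for _ in range(steps):
            heat = (1 - alpha) * heat + alpha * heat @ walk
            for k, s in enumerate(seeds):
                heat[k, s] = 1.0              # keep the sources "hot"
        return heat.argmax(axis=0)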

    Reconstructing complex indoor environments with arbitrary walls orientations

    Full text link
    Reconstructing the architectural shape of interiors is a problem that is gaining increasing attention in the field of computer graphics. Some solutions have been proposed in recent years, but cluttered environments with multiple rooms and non-vertical walls still represent a challenge for state-of-the-art methods. We propose an occlusion-aware pipeline that extends current solutions to work with complex environments with arbitrary wall orientations.

    SOAR: Stochastic optimization for affine global point set registration

    Full text link
    We introduce a stochastic algorithm for pairwise affine registration of partially overlapping 3D point clouds with unknown point correspondences. The algorithm recovers the globally optimal scale, rotation, and translation alignment parameters and is applicable in a variety of difficult settings, including very sparse, noisy, and outlier-ridden datasets that do not permit the computation of local descriptors. The technique is based on a stochastic approach for the global optimization of an alignment error function robust to noise and resistant to outliers. At each optimization step, it alternates between stochastically visiting a generalized BSP-tree representation of the current solution landscape to select a promising transformation, finding point-to-point correspondences using a GPU-accelerated technique, and incorporating new error values in the BSP tree. In contrast to previous work, instead of simply constructing the tree by guided random sampling, we exploit the problem structure through a low-cost local minimization process based on analytically solving absolute orientation problems using the current correspondences. We demonstrate the quality and performance of our method on a variety of large point sets with different scales, resolutions, and noise characteristics.
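    The local minimization mentioned above alternates between point-to-point correspondences and the closed-form absolute-orientation solution. A sketch of that inner refinement (essentially a few rounds of scaled ICP using the Umeyama solution) is given below; the stochastic BSP-tree exploration and the GPU correspondence search that make up the full method are omitted, so this is not the complete SOAR algorithm.

    import numpy as np
    from scipy.spatial import cKDTree

    def absolute_orientation(src, dst):
        """Closed-form similarity (s, R, t) minimizing ||s * R @ x + t - y|| over paired points."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        S, D = src - mu_s, dst - mu_d
        cov = D.T @ S / len(src)
        U, sig, Vt = np.linalg.svd(cov)
        sign = np.sign(np.linalg.det(U @ Vt))
        E = np.diag([1.0, 1.0, sign])          # reflection guard
        R = U @ E @ Vt
        s = np.trace(np.diag(sig) @ E) / S.var(axis=0).sum()
        t = mu_d - s * R @ mu_s
        return s, R, t

    def refine(src, dst, s, R, t, iters=10):
        """A few correspondence / absolute-orientation rounds starting from (s, R, t)."""
        tree = cKDTree(dst)
        for _ in range(iters):
            moved = s * src @ R.T + t           # apply current similarity to the source cloud
            _, idx = tree.query(moved)          # nearest-neighbour correspondences
            s, R, t = absolute_orientation(src, dst[idx])
        return s, R, t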

    Robust Reconstruction of Interior Building Structures with Multiple Rooms under Clutter and Occlusions

    No full text
    We present a robust approach for reconstructing the architectural structure of complex indoor environments given a set of cluttered input scans. Our method first uses an efficient occlusion-aware process to extract planar patches as candidate walls, separating them from clutter and coping with missing data. Using a diffusion process to further increase its robustness, our algorithm is able to reconstruct a clean architectural model from the candidate walls. To our knowledge, this is the first indoor reconstruction method which goes beyond a binary classification and automatically recognizes different rooms as separate components. We demonstrate the validity of our approach by testing it on both synthetic models and real-world 3D scans of indoor environments. Keywords: indoor scene reconstruction; LIDAR reconstruction; point cloud processing